Sep 19, 2013, by Marcus Hutter (texts)
The Minimum Description Length (MDL) principle selects the model that has the shortest code for data plus model. We show that for a countable class of models, MDL predictions are close to the true distribution in a strong sense. The result is completely general. No independence, ergodicity, stationarity, identifiability, or other assumption on the model class need to be made. More formally, we show that for any countable class of models, the distributions selected by MDL (or MAP) asymptotically...
Source: http://arxiv.org/abs/0909.4588v1
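A minimal sketch of the two-part code idea behind MDL (not the paper's general construction): among a small grid of Bernoulli models, select the one minimising the code length of the model plus the code length of the data given the model. The grid, the uniform model code, and the bit-length bookkeeping are illustrative assumptions.

```python
import math

# Minimal two-part-code MDL sketch (illustrative only, not the paper's setup):
# models are Bernoulli(theta) on a coarse grid; the model code length is
# -log2 of a uniform prior over the grid, the data code length is -log2 of
# the likelihood. MDL selects the model minimising their sum.

def mdl_select(bits, thetas=None):
    thetas = thetas or [i / 10 for i in range(1, 10)]
    prior = 1.0 / len(thetas)                 # uniform code over the grid
    best = None
    for theta in thetas:
        model_len = -math.log2(prior)
        data_len = -sum(math.log2(theta if b else 1 - theta) for b in bits)
        total = model_len + data_len
        if best is None or total < best[0]:
            best = (total, theta)
    return best                               # (code length in bits, chosen theta)

print(mdl_select([1, 1, 0, 1, 1, 1, 0, 1]))   # selects theta near the empirical mean
```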
Sep 19, 2013, by Marcus Hutter (texts)
The provably asymptotically fastest algorithm within a factor of 5 for formally described problems will be constructed. The main idea is to enumerate all programs provably equivalent to the original problem by enumerating all proofs. The algorithm could be interpreted as a generalization and improvement of Levin search, which is, within a multiplicative constant, the fastest algorithm for inverting functions. Blum's speed-up theorem is avoided by taking into account only programs for which a...
Source: http://arxiv.org/abs/cs/0102018v1
Sep 18, 2013, by Marcus Hutter (texts)
We derive an exact and efficient Bayesian regression algorithm for piecewise constant functions of unknown segment number, boundary location, and levels. It works for any noise and segment level prior, e.g. Cauchy which can handle outliers. We derive simple but good estimates for the in-segment variance. We also propose a Bayesian regression curve as a better way of smoothing data without blurring boundaries. The Bayesian approach also allows straightforward determination of the evidence, break...
Source: http://arxiv.org/abs/math/0606315v1
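As an illustration of the kind of exact computation involved (a toy sketch, not the paper's algorithm for unknown segment number): with a single unknown boundary, Gaussian noise of known variance, and a Gaussian prior on each segment level, the segment evidence has a closed form and the boundary posterior follows directly.

```python
import math
import numpy as np

# Toy exact Bayesian inference for a piecewise constant signal with ONE unknown
# boundary. Assumptions (not the paper's general setting): Gaussian noise with
# known variance sigma2, Gaussian N(0, tau2) prior on each segment level.

def log_segment_evidence(y, sigma2=0.25, tau2=10.0):
    n = len(y)
    if n == 0:
        return 0.0
    s, q = y.sum(), (y ** 2).sum()
    return (-0.5 * n * math.log(2 * math.pi)
            - 0.5 * (n - 1) * math.log(sigma2)
            - 0.5 * math.log(sigma2 + n * tau2)
            - 0.5 / sigma2 * (q - tau2 * s ** 2 / (sigma2 + n * tau2)))

def boundary_posterior(y):
    """Posterior over the boundary index b, with a uniform prior over b."""
    logs = np.array([log_segment_evidence(y[:b]) + log_segment_evidence(y[b:])
                     for b in range(1, len(y))])
    w = np.exp(logs - logs.max())
    return w / w.sum()

y = np.r_[np.random.default_rng(0).normal(0.0, 0.5, 30),
          np.random.default_rng(1).normal(2.0, 0.5, 20)]
print("most probable boundary:", boundary_posterior(y).argmax() + 1)  # near 30
```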
Sep 21, 2013, by Marcus Hutter (texts)
This paper studies sequence prediction based on the monotone Kolmogorov complexity Km=-log m, i.e. based on universal deterministic/one-part MDL. m is extremely close to Solomonoff's universal prior M, the latter being an excellent predictor in deterministic as well as probabilistic environments, where performance is measured in terms of convergence of posteriors or losses. Despite this closeness to M, it is difficult to assess the prediction quality of m, since little is known about the...
Source: http://arxiv.org/abs/cs/0508043v1
Jul 20, 2013, by Marcus Hutter (texts)
Numerical and analytical studies of the instanton liquid model have allowed the determination of many hadronic parameters during the last 13 years. Most of this thesis is devoted to the extension of the analytical methods. The meson correlation (polarization) functions are calculated in the instanton liquid model including dynamical quark loops. The correlators are plotted and masses and couplings of the $\sigma$, $\rho$, $\omega$, $a1$ and $f1$ are obtained from a spectral fit. A separated...
Source: http://arxiv.org/abs/hep-ph/0107098v1
Sep 18, 2013, by Marcus Hutter (texts)
Decision theory formally solves the problem of rational agents in uncertain worlds if the true environmental probability distribution is known. Solomonoff's theory of universal induction formally solves the problem of sequence prediction for unknown distribution. We unify both theories and give strong arguments that the resulting universal AIXI model behaves optimally in any computable environment. The major drawback of the AIXI model is that it is uncomputable. To overcome this problem, we...
Source: http://arxiv.org/abs/cs/0012011v1
Sep 19, 2013, by Marcus Hutter (texts)
This article is a brief guide to the field of algorithmic information theory (AIT), its underlying philosophy, and the most important concepts. AIT arises by mixing information theory and computation theory to obtain an objective and absolute notion of information in an individual object, and in so doing gives rise to an objective and robust notion of randomness of individual objects. This is in contrast to classical information theory that is based on random variables and communication, and...
Source: http://arxiv.org/abs/cs/0703024v1
Sep 21, 2013, by Marcus Hutter (texts)
Various optimality properties of universal sequence predictors based on Bayes-mixtures in general, and Solomonoff's prediction scheme in particular, will be studied. The probability of observing $x_t$ at time $t$, given past observations $x_1...x_{t-1}$ can be computed with the chain rule if the true generating distribution $\mu$ of the sequences $x_1x_2x_3...$ is known. If $\mu$ is unknown, but known to belong to a countable or continuous class $\mathcal{M}$ one can base one's prediction on the...
Source: http://arxiv.org/abs/cs/0311014v1
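A minimal sketch of such a Bayes-mixture predictor for a finite class of Bernoulli models (the class, the uniform prior weights, and the binary alphabet are assumptions; the paper covers countable and continuous classes and Solomonoff's prior):

```python
import numpy as np

# Bayes mixture xi over a small Bernoulli model class: update posterior weights
# after each observed symbol, then mix the models' next-symbol probabilities.

thetas = np.array([0.1, 0.3, 0.5, 0.7, 0.9])    # assumed model class
weights = np.full(len(thetas), 1 / len(thetas))  # prior weights w_nu

def predict_next(x_past):
    """xi(x_t = 1 | x_1..x_{t-1}) via the chain rule / posterior weights."""
    w = weights.copy()
    for x in x_past:                             # Bayes update per symbol
        lik = thetas if x == 1 else 1 - thetas
        w = w * lik
        w = w / w.sum()
    return float((w * thetas).sum())             # mixture probability of a 1

seq = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1]
print(predict_next(seq))   # close to the high-bias models, as 8 of 10 symbols are 1
```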
Sep 18, 2013, by Marcus Hutter (texts)
The QCD gauge field is modeled as an ensemble of statistically independent selfdual and antiselfdual regions. This model is motivated from instanton physics. The scale anomaly then allows one to relate the topological susceptibility to the gluon condensate. With the help of Witten's formula for m_eta' and an estimate of the suppression of the gluon condensate due to light quarks, the mass of the eta' can be related to f_pi and the physical gluon condensate. We get the quite satisfactory value...
Source: http://arxiv.org/abs/hep-ph/9509401v1
Feb 22, 2019, by Fritz Hutter (audio)
Sep 17, 2013, by Marcus Hutter (texts)
Walley's Imprecise Dirichlet Model (IDM) for categorical data overcomes several fundamental problems which other approaches to uncertainty suffer from. Yet, to be useful in practice, one needs efficient ways for computing the imprecise=robust sets or intervals. The main objective of this work is to derive exact, conservative, and approximate, robust and credible interval estimates under the IDM for a large class of statistical estimators, including the entropy and mutual information.
Source: http://arxiv.org/abs/math/0305121v1
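For the elementary base case of a single category probability, the IDM posterior-mean interval has a simple closed form (a sketch only; the paper's contribution is interval estimates for derived quantities such as entropy and mutual information): with hyperparameter s and count n_j out of N observations, the posterior mean of theta_j ranges over [n_j/(N+s), (n_j+s)/(N+s)].

```python
# Elementary IDM interval for one category probability (base case only).
# s is the IDM hyperparameter; the set of Dirichlet priors sweeps the prior
# mean over the simplex, which shifts the posterior mean by at most s/(N+s).

def idm_mean_interval(n_j, N, s=1.0):
    return n_j / (N + s), (n_j + s) / (N + s)

print(idm_mean_interval(n_j=7, N=20))   # approximately (0.333, 0.381)
print(idm_mean_interval(n_j=0, N=20))   # never a dogmatic 0: (0.0, 0.048)
```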
Oct 10, 2020, by Hutter, Heribert (texts)
189 p. : 23 cm
Topic: Art -- History -- Outlines, syllabi, etc
Sep 18, 2013, by Marcus Hutter (texts)
Consider an agent interacting with an environment in cycles. In every interaction cycle the agent is rewarded for its performance. We compare the average reward U from cycle 1 to m (average value) with the future discounted reward V from cycle k to infinity (discounted value). We consider essentially arbitrary (non-geometric) discount sequences and arbitrary reward sequences (non-MDP environments). We show that asymptotically U for m->infinity and V for k->infinity are equal, provided...
Source: http://arxiv.org/abs/cs/0605040v1
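A quick numerical illustration of the claim (a sketch only, with an assumed reward sequence converging to 1, an assumed non-geometric discount sequence gamma_i = 1/i^2, and infinite sums truncated at a finite horizon):

```python
import numpy as np

# Compare the average value U_m with the normalised discounted value V_k for a
# reward sequence r_i -> 1 and the summable, non-geometric discount 1/i^2.
# Both quantities approach the same limit, consistent with the abstract's claim.

H = 200_000                          # truncation horizon for the "infinite" sums
i = np.arange(1, H + 1)
r = 1.0 - 1.0 / np.sqrt(i)           # reward sequence converging to 1
gamma = 1.0 / i ** 2                 # discount sequence

def U(m):
    return float(r[:m].mean())

def V(k):
    tail = slice(k - 1, H)
    return float((gamma[tail] * r[tail]).sum() / gamma[tail].sum())

for m in (10, 1_000, 100_000):
    print(f"U_{m} = {U(m):.4f}   V_{m} = {V(m):.4f}")   # both creep towards 1
```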
Sep 17, 2013, by Marcus Hutter (texts)
Increasingly encompassing models have been suggested for our world. Theories range from generally accepted to increasingly speculative to apparently bogus. The progression of theories from ego- to geo- to helio-centric models to universe and multiverse theories and beyond was accompanied by a dramatic increase in the sizes of the postulated worlds, with humans being expelled from their center to ever more remote and random locations. Rather than leading to a true theory of everything, this...
Source: http://arxiv.org/abs/0912.5434v2
Sep 24, 2013, by Marcus Hutter (texts)
In evolutionary algorithms, the fitness of a population increases with time by mutating and recombining individuals and by a biased selection of more fit individuals. The right selection pressure is critical in ensuring sufficient optimization progress on the one hand and in preserving genetic diversity to be able to escape from local optima on the other. We propose a new selection scheme, which is uniform in the fitness values. It generates selection pressure towards sparsely populated fitness...
Source: http://arxiv.org/abs/cs/0103015v1
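A minimal sketch of a selection rule that is uniform in the fitness values, in the spirit of this abstract (the exact scheme and its analysis are in the paper; the tie-breaking and toy fitness function below are illustrative assumptions):

```python
import random

# Fitness-uniform style selection: draw a fitness LEVEL uniformly between the
# current minimum and maximum fitness, then return the individual whose fitness
# is closest to that level. Sparsely populated fitness levels therefore receive
# as much selection mass as crowded ones.

def fitness_uniform_select(population, fitness):
    values = [fitness(ind) for ind in population]
    target = random.uniform(min(values), max(values))
    return min(zip(values, population), key=lambda vp: abs(vp[0] - target))[1]

pop = list(range(20))                                   # toy population
winner = fitness_uniform_select(pop, fitness=lambda x: (x - 13) ** 2)
print(winner)
```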
Feb 22, 2010, by Hutter, Lester (texts)
Sep 19, 2013, by Marcus Hutter (texts)
An algorithm $M$ is described that solves any well-defined problem $p$ as quickly as the fastest algorithm computing a solution to $p$, save for a factor of 5 and low-order additive terms. $M$ optimally distributes resources between the execution of provably correct $p$-solving programs and an enumeration of all proofs, including relevant proofs of program correctness and of time bounds on program runtimes. $M$ avoids Blum's speed-up theorem by ignoring programs without correctness proof. $M$...
Source: http://arxiv.org/abs/cs/0206022v1
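The time sharing underlying such constructions can be sketched as Levin-search-style interleaving (this is not $M$ itself, which additionally enumerates proofs of correctness and of time bounds; the generator interface and the doubling schedule below are assumptions):

```python
# Levin-search-style interleaving: candidate programs are run in phases, with
# program i receiving a time budget proportional to 2^{-i} of each phase, so a
# fast program is only slowed by a constant factor.

def interleaved_search(programs, checker, max_phase=20):
    """programs: resumable generator factories; checker: validates a candidate."""
    gens = [p() for p in programs]
    for phase in range(1, max_phase + 1):
        for i, g in enumerate(gens):
            budget = 2 ** max(phase - i, 0)     # doubling schedule per phase
            for _ in range(budget):
                try:
                    result = next(g)
                except StopIteration:
                    break
                if result is not None and checker(result):
                    return result
    return None

# Toy usage: hypothetical "programs" that search for a square root of 289.
def slow():
    n = 0
    while True:
        n += 1
        yield n if n * n == 289 else None

print(interleaved_search([slow, slow], checker=lambda n: n * n == 289))  # 17
```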
Sep 23, 2013, by Marcus Hutter (texts)
The technological singularity refers to a hypothetical scenario in which technological advances virtually explode. The most popular scenario is the creation of super-intelligent algorithms that recursively create ever higher intelligences. It took many decades for these ideas to spread from science fiction to popular science magazines and finally to attract the attention of serious philosophers. David Chalmers' (JCS 2010) article is the first comprehensive philosophical analysis of the...
Source: http://arxiv.org/abs/1202.6177v1
Sep 19, 2013, by Marcus Hutter (texts)
Sequential decision theory formally solves the problem of rational agents in uncertain worlds if the true environmental prior probability distribution is known. Solomonoff's theory of universal induction formally solves the problem of sequence prediction for unknown prior distribution. We combine both ideas and get a parameter-free theory of universal Artificial Intelligence. We give strong arguments that the resulting AIXI model is the most intelligent unbiased agent possible. We outline how...
Source: http://arxiv.org/abs/cs/0701125v1
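For reference, the action choice of the AIXI agent mentioned here is commonly written as the following expectimax expression over a universal mixture (a sketch of the standard formulation, not quoted from this abstract; horizon $m$, universal monotone Turing machine $U$, programs $q$ of length $\ell(q)$):

$$a_k := \arg\max_{a_k}\sum_{o_k r_k}\cdots\max_{a_m}\sum_{o_m r_m}\bigl[r_k+\cdots+r_m\bigr]\sum_{q\,:\,U(q,a_1\ldots a_m)=o_1 r_1\ldots o_m r_m}2^{-\ell(q)}$$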
Feb 21, 2019, by Fritz Hutter (audio)
The last cover story of the sports magazine that folded at the end of 2018 tells the story of its last editor-in-chief.
Topic: sport
Sep 23, 2013, by Marcus Hutter (texts)
Given i.i.d. data from an unknown distribution, we consider the problem of predicting future items. An adaptive way to estimate the probability density is to recursively subdivide the domain to an appropriate data-dependent granularity. A Bayesian would assign a data-independent prior probability to "subdivide", which leads to a prior over infinite(ly many) trees. We derive an exact, fast, and simple inference algorithm for such a prior, for the data evidence, the predictive...
Source: http://arxiv.org/abs/0903.5342v1
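A minimal sketch of the recursive evidence computation under such a "probability of subdividing" prior (the dyadic midpoint splits, the Beta(1,1) prior on the left/right mass split, the fixed split probability, and the depth cap below are illustrative assumptions, not the paper's exact choices):

```python
from math import lgamma, exp, log

# Recursive marginal likelihood for density estimation on [0,1): at each cell we
# either stop (uniform density on the cell) or subdivide at the midpoint, sending
# a Beta(1,1)-distributed fraction of the probability mass to the left half.

def log_evidence(xs, lo=0.0, hi=1.0, p_split=0.5, depth=10):
    n = len(xs)
    width = hi - lo
    log_leaf = -n * log(width)                      # uniform density on the cell
    if depth == 0 or n == 0:
        return log_leaf
    mid = (lo + hi) / 2
    left = [x for x in xs if x < mid]
    right = [x for x in xs if x >= mid]
    # Beta(1,1) prior on the fraction of mass sent to the left half:
    log_alloc = lgamma(len(left) + 1) + lgamma(len(right) + 1) - lgamma(n + 2)
    log_split = (log_alloc
                 + log_evidence(left, lo, mid, p_split, depth - 1)
                 + log_evidence(right, mid, hi, p_split, depth - 1))
    m = max(log_leaf, log_split)                    # mix "stop" and "subdivide"
    return m + log((1 - p_split) * exp(log_leaf - m) + p_split * exp(log_split - m))

clustered = [0.10 + 0.001 * i for i in range(20)]
spread = [i / 20 + 0.025 for i in range(20)]
print(log_evidence(clustered), log_evidence(spread))  # clustered data gets far higher evidence
```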
Sep 22, 2013, by Marcus Hutter (texts)
We introduce a new principle for model selection in regression and classification. Many regression models are controlled by some smoothness or flexibility or complexity parameter c, e.g. the number of neighbors to be averaged over in k nearest neighbor (kNN) regression or the polynomial degree in regression with polynomials. Let f_D^c be the (best) regressor of complexity c on data D. A more flexible regressor can fit more data D' well than a more rigid one. If something (here small loss) is...
Source: http://arxiv.org/abs/math/0702804v1
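One way to turn this counting idea into code is a Monte-Carlo ranking of the observed fit loss among fits to alternative data sets (a heavily hedged sketch: the paper defines and computes its criterion exactly; the kNN regressor, squared loss, and uniform reference targets below are all assumptions):

```python
import numpy as np

# Illustration of the counting idea: a complexity c is penalised if many
# alternative target vectors y' would have been fit with a loss as small as
# the loss on the observed data. Over-flexible models (e.g. 1-NN) fit almost
# everything, so their rank is large.

def knn_fit_loss(x, y, k):
    """Empirical squared loss of k-nearest-neighbour regression on (x, y)."""
    n = len(x)
    loss = 0.0
    for i in range(n):
        idx = np.argsort(np.abs(x - x[i]))[:k]   # k nearest (including self)
        loss += (y[i] - y[idx].mean()) ** 2
    return loss / n

def loss_rank(x, y, k, samples=500, rng=np.random.default_rng(0)):
    """Fraction of random target vectors y' fit at least as well as y."""
    ref = knn_fit_loss(x, y, k)
    lo, hi = y.min(), y.max()
    hits = sum(knn_fit_loss(x, rng.uniform(lo, hi, size=len(y)), k) <= ref
               for _ in range(samples))
    return hits / samples

x = np.linspace(0, 1, 40)
y = np.sin(6 * x) + 0.1 * np.random.default_rng(1).normal(size=40)
for k in (1, 3, 5, 10, 20):
    print(k, loss_rank(x, y, k))   # prefer the k with the smallest rank
```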
Sep 22, 2013, by Marcus Hutter (texts)
Feature Markov Decision Processes (PhiMDPs) are well-suited for learning agents in general environments. Nevertheless, unstructured (Phi)MDPs are limited to relatively simple environments. Structured MDPs like Dynamic Bayesian Networks (DBNs) are used for large-scale real-world problems. In this article I extend PhiMDP to PhiDBN. The primary contribution is to derive a cost criterion that allows the most relevant features to be extracted automatically from the environment, leading to the...
Source: http://arxiv.org/abs/0812.4581v1
Sep 18, 2013, by Marcus Hutter (texts)
We give a brief introduction to the AIXI model, which unifies and overcomes the limitations of sequential decision theory and universal Solomonoff induction. While the former theory is suited for active agents in known environments, the latter is suited for passive prediction of unknown environments.
Source: http://arxiv.org/abs/cs/0306091v2
Sep 18, 2013, by Marcus Hutter (texts)
The proton form factors are reduced to vacuum correlators of 4 quark fields by assuming independent constituent quarks. The axial singlet quark and gluonic form factors are calculated in the instanton liquid model. A discussion of gauge(in)dependence is given.
Source: http://arxiv.org/abs/hep-ph/9509402v1
Dec 25, 2014, by Hutter, Herman (texts)
Mode of access: Internet
Topics: bub_upload, Germany. Heer
Source: http://books.google.com/books?id=WeygAAAAMAAJ&hl=&source=gbs_api
Jul 2, 2020, by Elias Hutter (texts)
The Nuremberg Polyglot by Elias Hutter in 1599. It ranks among the scarcest books in bibliography. It contains the New Testament in the following 12 languages: Syriac, Hebrew, Greek, Latin, German, Bohemian, Italian, Spanish, Gallic, English, Danish, and Polish. The Nuremberg Polyglot was compiled from Protestant Bibles, except for the Latin, which was taken from Jerome's corrupt Latin Vulgate in order to contrast it with the others. However, Hutter changed the texts of the Latin,...
Topics: Nuremberg, Polyglot, Bible, New Testament, Hutter, 1599
Sep 17, 2013, by Marcus Hutter (texts)
Solomonoff unified Occam's razor and Epicurus' principle of multiple explanations to one elegant, formal, universal theory of inductive inference, which initiated the field of algorithmic information theory. His central result is that the posterior of his universal semimeasure M converges rapidly to the true sequence generating posterior mu, if the latter is computable. Hence, M is eligible as a universal predictor in case of unknown mu. We investigate the existence and convergence of...
Source: http://arxiv.org/abs/cs/0305052v1
Jul 9, 2010, by Hutter, U. (texts)
In installing groups of windmill generators to produce electric power from the force of the wind, it is important to locate the units of such a network in such fashion that the so-called two-minute variation of the wind velocity can be overcome. This is done by using at least three windmill generators located an appropriate distance apart. When the wind velocity is insufficiently great to drive the blades of the windmills, a source of power should be available (battery, power from other...
Topics: ABLATIVE MATERIALS, CERAMIC COATINGS, HEAT SHIELDING, SPACE ENVIRONMENT SIMULATION, COMPOSITE...
Sep 18, 2013, by Marcus Hutter (texts)
This paper studies sequence prediction based on the monotone Kolmogorov complexity Km=-log m, i.e. based on universal deterministic/one-part MDL. m is extremely close to Solomonoff's prior M, the latter being an excellent predictor in deterministic as well as probabilistic environments, where performance is measured in terms of convergence of posteriors or losses. Despite this closeness to M, it is difficult to assess the prediction quality of m, since little is known about the closeness of...
Source: http://arxiv.org/abs/cs/0306036v1
Sep 19, 2013, by Marcus Hutter (texts)
Various QCD correlators are calculated in the instanton liquid model in zeromode approximation and $1/N_c$ expansion. Previous works are extended by including dynamical quark loops. In contrast to the original "perturbative" $1/N_c$ expansion not all quark loops are suppressed. In the flavor singlet meson correlators a chain of quark bubbles survives the $N_c\to\infty$ limit causing a massive $\eta^\prime$ in the pseudoscalar correlator while keeping massless pions in the triplet...
Source: http://arxiv.org/abs/hep-ph/9501245v1
Jun 30, 2018, by Marcus Hutter (texts)
We consider the problem of converting offline estimators into an online predictor or estimator with small extra regret. Formally this is the problem of merging a collection of probability measures over strings of length 1,2,3,... into a single probability measure over infinite sequences. We describe various approaches and their pros and cons on various examples. As a side-result we give an elementary non-heuristic purely combinatoric derivation of Turing's famous estimator. Our main technical...
Topics: Computation, Statistics, Mathematics, Computing Research Repository, Information Theory, Statistics...
Source: http://arxiv.org/abs/1407.3334
Sep 20, 2013, by Marcus Hutter (texts)
The probability of observing $x_t$ at time $t$, given past observations $x_1...x_{t-1}$ can be computed with Bayes' rule if the true generating distribution $\mu$ of the sequences $x_1x_2x_3...$ is known. If $\mu$ is unknown, but known to belong to a class $M$ one can base one's prediction on the Bayes mix $\xi$ defined as a weighted sum of distributions $\nu\in M$. Various convergence results of the mixture posterior $\xi_t$ to the true posterior $\mu_t$ are presented. In particular a new...
Source: http://arxiv.org/abs/cs/0301014v1
Sep 23, 2013, by Marcus Hutter (texts)
Given i.i.d. data from an unknown distribution, we consider the problem of predicting future items. An adaptive way to estimate the probability density is to recursively subdivide the domain to an appropriate data-dependent granularity. A Bayesian would assign a data-independent prior probability to "subdivide", which leads to a prior over infinite(ly many) trees. We derive an exact, fast, and simple inference algorithm for such a prior, for the data evidence, the predictive...
Source: http://arxiv.org/abs/math/0411515v1
Sep 22, 2013, by Marcus Hutter (texts)
Solomonoff's uncomputable universal prediction scheme $\xi$ allows one to predict the next symbol $x_k$ of a sequence $x_1...x_{k-1}$ for any Turing computable, but otherwise unknown, probabilistic environment $\mu$. This scheme will be generalized to arbitrary environmental classes, which, among others, allows the construction of computable universal prediction schemes $\xi$. Convergence of $\xi$ to $\mu$ in a conditional mean squared sense and with $\mu$ probability 1 is proven. It is shown that...
Source: http://arxiv.org/abs/cs/0106036v1
Sep 21, 2013, by Marcus Hutter (texts)
The Bayesian framework is a well-studied and successful framework for inductive reasoning, which includes hypothesis testing and confirmation, parameter estimation, sequence prediction, classification, and regression. But standard statistical guidelines for choosing the model class and prior are not always available or fail, in particular in complex situations. Solomonoff completed the Bayesian framework by providing a rigorous, unique, formal, and universal choice for the model class and the...
Source: http://arxiv.org/abs/0709.1516v1
Sep 18, 2013, by Marcus Hutter (texts)
While statistics focusses on hypothesis testing and on estimating (properties of) the true sampling distribution, in machine learning the performance of learning algorithms on future data is the primary issue. In this paper we bridge the gap with a general principle (PHI) that identifies hypotheses with best predictive performance. This includes predictive point and interval estimation, simple and composite hypothesis testing, (mixture) model selection, and others as special cases. For concrete...
Source: http://arxiv.org/abs/0809.1270v1
Sep 18, 2013, by Marcus Hutter (texts)
This encyclopedic article gives a mini-introduction into the theory of universal learning, founded by Ray Solomonoff in the 1960s and significantly developed and extended in the last decade. It explains the spirit of universal learning, but necessarily glosses over technical subtleties.
Source: http://arxiv.org/abs/1102.2467v1
Sep 19, 2013, by Marcus Hutter (texts)
The gluon propagator is calculated in the instanton background in a form appropriate for extracting the momentum dependent gluon mass. In background-$\xi$-gauge we obtain a mass of $400$ MeV for small $p^2$, independent of the gauge parameter $\xi$.
Source: http://arxiv.org/abs/hep-ph/9501335v2
Jun 30, 2018, by Marcus Hutter (texts)
We consider a Reinforcement Learning setup where an agent interacts with an environment in observation-reward-action cycles without any (esp. MDP) assumptions on the environment. State aggregation and more generally feature reinforcement learning is concerned with mapping histories/raw-states to reduced/aggregated states. The idea behind both is that the resulting reduced process (approximately) forms a small stationary finite-state MDP, which can then be efficiently solved or learnt. We...
Topics: Computing Research Repository, Artificial Intelligence, Learning
Source: http://arxiv.org/abs/1407.3341
Sep 20, 2013, by Marcus Hutter (texts)
The problem of making sequential decisions in unknown probabilistic environments is studied. In cycle $t$ action $y_t$ results in perception $x_t$ and reward $r_t$, where all quantities in general may depend on the complete history. The perception $x_t$ and reward $r_t$ are sampled from the (reactive) environmental probability distribution $\mu$. This very general setting includes, but is not limited to, (partially observable, k-th order) Markov decision processes. Sequential decision theory...
Source: http://arxiv.org/abs/cs/0204040v1
Sep 18, 2013, by Marcus Hutter (texts)
The Bayesian framework is ideally suited for induction problems. The probability of observing $x_t$ at time $t$, given past observations $x_1...x_{t-1}$ can be computed with Bayes' rule if the true distribution $\mu$ of the sequences $x_1x_2x_3...$ is known. The problem, however, is that in many cases one does not even have a reasonable estimate of the true distribution. In order to overcome this problem a universal distribution $\xi$ is defined as a weighted sum of distributions $\mu_i\in M$,...
Source: http://arxiv.org/abs/cs/0101019v2
Sep 19, 2013, by Marcus Hutter (texts)
Solomonoff sequence prediction is a scheme to predict digits of binary strings without knowing the underlying probability distribution. We call a prediction scheme informed when it knows the true probability distribution of the sequence. Several new relations between universal Solomonoff sequence prediction and informed prediction and general probabilistic prediction schemes will be proved. Among others, they show that the number of errors in Solomonoff prediction is finite for computable...
Source: http://arxiv.org/abs/cs/9912008v2
Sep 21, 2013, by Marcus Hutter (texts)
Walley's Imprecise Dirichlet Model (IDM) for categorical i.i.d. data extends the classical Dirichlet model to a set of priors. It overcomes several fundamental problems which other approaches to uncertainty suffer from. Yet, to be useful in practice, one needs efficient ways for computing the imprecise=robust sets or intervals. The main objective of this work is to derive exact, conservative, and approximate, robust and credible interval estimates under the IDM for a large class of statistical...
Source: http://arxiv.org/abs/0901.4137v1
Apr 4, 2019, by Hutter, Andreas (texts)
34 unnumbered pages, 350 pages, 14 unnumbered pages, 1 unnumbered leaf of plates : 16 cm (8vo)
Topics: Surgical Procedures, Operative, Military Medicine
Sep 18, 2013, by Marcus Hutter (texts)
Solomonoff completed the Bayesian framework by providing a rigorous, unique, formal, and universal choice for the model class and the prior. We discuss in breadth how and in which sense universal (non-i.i.d.) sequence prediction solves various (philosophical) problems of traditional Bayesian sequence prediction. We show that Solomonoff's model possesses many desirable properties: Fast convergence and strong bounds, and in contrast to most classical continuous prior densities has no zero...
Source: http://arxiv.org/abs/cs/0605009v1
Sep 22, 2013, by Marcus Hutter (texts)
General purpose intelligent learning agents cycle through (complex, non-MDP) sequences of observations, actions, and rewards. On the other hand, reinforcement learning is well-developed for small finite state Markov Decision Processes (MDPs). So far it is an art performed by human designers to extract the right state representation out of the bare observations, i.e. to reduce the agent setup to the MDP framework. Before we can think of mechanizing this search for suitable MDPs, we need a formal...
Source: http://arxiv.org/abs/0812.4580v1
Oct 3, 2013, by Jurg Hutter (texts)
Contents: Basic Quantum Mechanics; Basic Mathematical Review; Molecular Hamiltonian; Two-Electron Systems and Spin; Hartree-Fock Approximation; Molecular Orbital Theory; Correlation Energy; Coupled Cluster Approaches; Moller-Plesset Perturbation Theory; Density Functional Theory: Part I; Density Functional Theory: Part II; Density Functional Theory: Part III; Molecular Properties; NMR Chemical Shielding. Lecture Notes Collection FreeScience.info ID2459 Obtained from...
Topics: Computational Chemistry
Nov 12, 2006, by Catherine Hutter (texts)
Nov 4, 2014, by Leonhard Hutter (texts)
Topic: bub_upload
Source: http://books.google.com/books?id=0XrsXfgwhO8C&hl=&source=gbs_api